
LLM-Guided Reinforcement Learning: Addressing Training Bottlenecks through Policy Modulation

Tan, Heng, Yan, Hua, Yang, Yu

arXiv.org Artificial Intelligence

While reinforcement learning (RL) has achieved notable success in various domains, training effective policies for complex tasks remains challenging. Agents often converge to local optima and fail to maximize long-term rewards. Existing approaches to mitigate training bottlenecks typically fall into two categories: (i) Automated policy refinement, which identifies critical states from past trajectories to guide policy updates, but suffers from costly and uncertain model training; and (ii) Human-in-the-loop refinement, where human feedback is used to correct agent behavior, but this does not scale well to environments with large or continuous action spaces. In this work, we design a large language model-guided policy modulation framework that leverages LLMs to improve RL training without additional model training or human intervention. We first prompt an LLM to identify critical states from a sub-optimal agent's trajectories. Based on these states, the LLM then provides action suggestions and assigns implicit rewards to guide policy refinement. Experiments across standard RL benchmarks demonstrate that our method outperforms state-of-the-art baselines, highlighting the effectiveness of LLM-based explanations in addressing RL training bottlenecks.
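The abstract's guidance loop (LLM flags critical states, suggests actions, and assigns implicit rewards) can be sketched as follows. All names here are illustrative stand-ins, not the paper's code, and the "LLM" is a stubbed heuristic in place of an actual prompted model.

```python
# Sketch of LLM-guided reward shaping (function names are illustrative).
# A stubbed "LLM" flags critical states from a trajectory and suggests
# actions; the agent then receives an implicit bonus for matching them.

def llm_critique(trajectory):
    """Stand-in for prompting an LLM: mark low-reward states as critical
    and suggest the action from the best-performing step."""
    best_step = max(trajectory, key=lambda step: step["reward"])
    return {step["state"]: best_step["action"]
            for step in trajectory if step["reward"] < 0}

def shaped_reward(state, action, env_reward, suggestions, bonus=0.5):
    """Add an implicit reward when the agent follows the LLM's suggestion."""
    if suggestions.get(state) == action:
        return env_reward + bonus
    return env_reward

trajectory = [
    {"state": "s0", "action": "left",  "reward": -1.0},
    {"state": "s1", "action": "right", "reward":  1.0},
]
suggestions = llm_critique(trajectory)                  # {"s0": "right"}
print(shaped_reward("s0", "right", -1.0, suggestions))  # -0.5
```

The shaped reward leaves the environment reward untouched except at LLM-flagged states, which is what lets the method refine a policy without any additional model training.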


Enforcing Cybersecurity Constraints for LLM-driven Robot Agents for Online Transactions

Shah, Shraddha Pradipbhai, Deshpande, Aditya Vilas

arXiv.org Artificial Intelligence

The integration of Large Language Models (LLMs) into autonomous robotic agents for conducting online transactions poses significant cybersecurity challenges. This study aims to enforce robust cybersecurity constraints to mitigate the risks associated with data breaches, transaction fraud, and system manipulation. The background focuses on the rise of LLM-driven robotic systems in e-commerce, finance, and service industries, alongside the vulnerabilities they introduce. A novel security architecture combining blockchain technology with multi-factor authentication (MFA) and real-time anomaly detection was implemented to safeguard transactions. Key performance metrics such as transaction integrity, response time, and breach detection accuracy were evaluated, showing improved security and system performance. The results highlight that the proposed architecture reduced fraudulent transactions by 90%, improved breach detection accuracy to 98%, and ensured secure transaction validation within a latency of 0.05 seconds. These findings emphasize the importance of cybersecurity in the deployment of LLM-driven robotic systems and suggest a framework adaptable to various online platforms.
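The transaction-integrity component can be illustrated with a minimal hash-chained ledger: each record commits to the previous record's hash, so tampering with any past transaction invalidates the chain. The field names and structure below are assumptions for the sketch, not the paper's implementation, and a real deployment would add the MFA and anomaly-detection layers the abstract describes.

```python
import hashlib
import json

# Minimal hash-chained transaction log (structure is an assumption,
# not the paper's architecture).

def make_block(txn, prev_hash):
    """Create a ledger entry committing to its payload and predecessor."""
    payload = json.dumps({"txn": txn, "prev": prev_hash}, sort_keys=True)
    return {"txn": txn, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify_chain(chain):
    """Recompute every hash; a tampered transaction breaks verification."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps({"txn": block["txn"], "prev": block["prev"]},
                             sort_keys=True)
        if block["prev"] != prev_hash:
            return False
        if block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev_hash = block["hash"]
    return True

chain = [make_block({"amount": 10}, "0" * 64)]
chain.append(make_block({"amount": 25}, chain[-1]["hash"]))
print(verify_chain(chain))          # True
chain[0]["txn"]["amount"] = 9999    # simulated fraudulent edit
print(verify_chain(chain))          # False
```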


UBSoft: A Simulation Platform for Robotic Skill Learning in Unbounded Soft Environments

Lin, Chunru, Fan, Jugang, Wang, Yian, Yang, Zeyuan, Chen, Zhehuan, Fang, Lixing, Wang, Tsun-Hsuan, Xian, Zhou, Gan, Chuang

arXiv.org Artificial Intelligence

Robots should be equipped to interact with the various soft materials that are ubiquitous in the real world. While physics simulation is one of the predominant methods for data collection and robot training, simulating soft materials presents considerable challenges: it is significantly more costly than simulating rigid objects in both simulation speed and storage requirements. These limitations typically restrict studies of soft materials to small, bounded areas, hindering the learning of skills in broader spaces. To address this issue, we introduce UBSoft, a new simulation platform designed to support unbounded soft environments for robot skill acquisition. Our platform uses spatially adaptive resolution scales, dynamically adjusting simulation resolution based on proximity to active robotic agents. This markedly reduces the storage and computation costs of large-scale scenarios involving soft materials. We also establish a set of benchmark tasks on our platform, including both locomotion and manipulation tasks, and conduct experiments to evaluate various reinforcement learning algorithms and trajectory optimization techniques, both gradient-based and sampling-based. Preliminary results indicate that sampling-based trajectory optimization generally performs better at obtaining a single trajectory that solves a task. Additionally, we conduct experiments in real-world environments to demonstrate that advancements made in our UBSoft simulator can translate to improved robot interactions with large-scale soft materials. More videos can be found at https://vis-www.cs.umass.edu/ubsoft/.
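The core idea of spatially adaptive resolution can be shown in a few lines: simulate finely only near active agents, coarsely elsewhere. The thresholds, scale values, and 1-D distance below are made up for illustration and are not UBSoft's actual parameters.

```python
# Toy version of spatially adaptive resolution: material near an active
# robot agent is simulated at full resolution, distant material coarsely.
# All numeric values here are illustrative, not UBSoft's.

def resolution_scale(particle, agents, fine=1.0, coarse=0.25, radius=2.0):
    """Return the simulation resolution for one particle position
    (1-D positions stand in for 3-D coordinates)."""
    dist = min(abs(particle - a) for a in agents)
    return fine if dist <= radius else coarse

agents = [0.0, 10.0]
print(resolution_scale(1.5, agents))   # 1.0  (near an agent: full resolution)
print(resolution_scale(5.0, agents))   # 0.25 (far from all agents: coarse)
```

Because most of an unbounded scene is far from any agent at any moment, the coarse branch dominates, which is where the storage and compute savings come from.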


An Efficient Multi-Robot Arm Coordination Strategy for Pick-and-Place Tasks using Reinforcement Learning

Jermann, Tizian, Kolvenbach, Hendrik, Estay, Fidel Esquivel, Kramer, Koen, Hutter, Marco

arXiv.org Artificial Intelligence

Plastic pollution in rivers has become a pressing global issue, with 11 million tons of plastic waste entering the ocean annually, 80% of which comes from 1,000 major polluting rivers [1]. To address this problem, a solution is needed that can remove plastic and other waste objects without interfering with the flora and fauna essential to river ecosystems [2]. Our Autonomous River Cleanup (ARC) project, initiated in 2019, leverages robotics and automation to remove plastic waste from rivers. To increase sorting capacity, we extend the existing single-arm sorting station [3] with additional robot arms. For multiple robot agents to sort waste efficiently on a conveyor belt, we develop and evaluate novel strategy algorithms using reinforcement learning that assign pick-and-place (PnP) tasks to the respective robot agents (Figure 1). Given a set of objects on the moving conveyor belt, the robot agents must remove waste objects, while bio-matter is ignored and collected at the end of the belt. The challenge is to allocate to each robot the optimal PnP operations for objects within its reachable workspace.
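The allocation problem above has a natural greedy baseline: assign each waste object to the first arm whose reachable window on the belt contains it, and skip bio-matter. The names and workspace bounds below are invented for the sketch; the paper's learned RL strategy replaces this fixed rule.

```python
# Illustrative greedy baseline for multi-arm PnP task allocation
# (names and workspace bounds are made up; the paper learns this
# assignment with RL instead of using a fixed rule).

def assign_tasks(objects, arms):
    """objects: [(name, belt_position, is_waste)]; arms: {arm: (lo, hi)}."""
    plan = {arm: [] for arm in arms}
    for name, pos, is_waste in objects:
        if not is_waste:
            continue                     # bio-matter stays on the belt
        for arm, (lo, hi) in arms.items():
            if lo <= pos <= hi:
                plan[arm].append(name)
                break                    # one pick per object
    return plan

objects = [("bottle", 0.3, True), ("leaf", 0.5, False), ("bag", 1.2, True)]
arms = {"arm_A": (0.0, 1.0), "arm_B": (1.0, 2.0)}
print(assign_tasks(objects, arms))
# {'arm_A': ['bottle'], 'arm_B': ['bag']}
```

A fixed rule like this ignores belt motion and arm cycle times, which is exactly the coordination gap a learned strategy is meant to close.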


Continual Skill and Task Learning via Dialogue

Gu, Weiwei, Kondepudi, Suresh, Huang, Lixiao, Gopalan, Nakul

arXiv.org Artificial Intelligence

Continual and interactive robot learning is a challenging problem, as the robot is presented with human users who expect it to learn novel skills to solve novel tasks perpetually and sample-efficiently. In this work we present a framework for robots to query and learn visuo-motor robot skills and task-relevant information via natural language dialogue interactions with human users. Previous approaches either focus on improving the performance of instruction-following agents, or passively learn novel skills or concepts. Instead, we use dialogue combined with a language-skill grounding embedding to query or confirm skills and/or tasks requested by a user. To achieve this goal, we develop and integrate three components in our agent. First, we propose a novel visuo-motor control policy, ACT with Low-Rank Adaptation (ACT-LoRA), which enables the existing state-of-the-art ACT model to perform few-shot continual learning. Second, we develop an alignment model that projects demonstrations across skill embodiments into a shared embedding, allowing us to know when to ask the user questions and/or for demonstrations. Finally, we integrate an existing LLM to interact with a human user and perform grounded interactive continual skill learning to solve a task. Our ACT-LoRA model learns novel fine-tuned skills with 100% accuracy when trained with only five demonstrations for a novel skill, while maintaining 74.75% accuracy on pre-trained skills in the RLBench dataset, where other models fall significantly short. We also performed a human-subjects study with 8 participants to demonstrate the continual learning capabilities of our combined framework. We achieve a success rate of 75% on the sandwich-making task, with the real robot learning from participant data, demonstrating that robots can learn novel skills or task knowledge from dialogue with non-expert users using our approach.
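The LoRA idea behind ACT-LoRA is that the pretrained weight W stays frozen and only a low-rank correction B·A is learned, so few-shot fine-tuning touches a handful of parameters and pretrained skills are largely preserved. The 2x2 matrices and pure-Python matmul below keep the sketch tiny; a real implementation would use torch with rank r much smaller than the layer width.

```python
# Minimal LoRA-style forward pass: y = x @ (W + alpha * B @ A).
# W is the frozen pretrained weight; only A and B would be trained.
# Sizes and values are illustrative, not from ACT-LoRA.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha=1.0):
    """Apply the base weight plus the low-rank adapter correction."""
    delta = matmul(B, A)                       # rank-r update, r = 1 here
    W_eff = [[w + alpha * d for w, d in zip(wr, dr)]
             for wr, dr in zip(W, delta)]
    return matmul([x], W_eff)[0]

W = [[1.0, 0.0], [0.0, 1.0]]        # frozen pretrained weight (identity)
A = [[0.5, 0.0]]                    # rank-1 adapter factors
B = [[1.0], [0.0]]
print(lora_forward([2.0, 3.0], W, A, B))   # [3.0, 3.0]
```

Setting alpha to 0 recovers the original model exactly, which is why adapters of this form are a natural fit for continual learning without catastrophic forgetting.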


A Comparative Analysis of Interactive Reinforcement Learning Algorithms in Warehouse Robot Grid Based Environment

Bora, Arunabh

arXiv.org Artificial Intelligence

The field of warehouse robotics is currently in high demand, with major technology and logistics companies making significant investments in these advanced systems. Training robots to operate in such complex environments is challenging, often requiring human supervision for adaptation and learning. Interactive reinforcement learning (IRL) is a key training methodology in human-computer interaction. This paper presents a comparative study of two IRL algorithms: Q-learning and SARSA, both trained in a virtual grid-simulation-based warehouse environment. To maintain consistent feedback rewards and avoid bias, feedback was provided by the same individual throughout the study.
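The two algorithms compared above differ only in their bootstrap term: Q-learning is off-policy (it bootstraps from the max over next actions), while SARSA is on-policy (it bootstraps from the action actually taken next). A minimal sketch, with illustrative hyperparameters:

```python
# Tabular Q-learning vs SARSA updates; the only difference is the target.
# alpha and gamma values below are illustrative.

def q_learning_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    target = r + gamma * max(Q[s_next].values())    # off-policy: best action
    Q[s][a] += alpha * (target - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.5, gamma=0.9):
    target = r + gamma * Q[s_next][a_next]          # on-policy: action taken
    Q[s][a] += alpha * (target - Q[s][a])

Q = {"s0": {"up": 0.0, "down": 0.0}, "s1": {"up": 1.0, "down": 0.0}}
q_learning_update(Q, "s0", "up", r=1.0, s_next="s1")
print(Q["s0"]["up"])    # 0.95 = 0.5 * (1.0 + 0.9 * 1.0)

Q2 = {"s0": {"up": 0.0, "down": 0.0}, "s1": {"up": 1.0, "down": 0.0}}
sarsa_update(Q2, "s0", "up", r=1.0, s_next="s1", a_next="down")
print(Q2["s0"]["up"])   # 0.5: ignores the better "up" action at s1
```

In an interactive setting, the human feedback described in the abstract would enter these updates through the reward term r.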


Enhancing Human-Robot Collaborative Assembly in Manufacturing Systems Using Large Language Models

Lim, Jonghan, Patel, Sujani, Evans, Alex, Pimley, John, Li, Yifei, Kovalenko, Ilya

arXiv.org Artificial Intelligence

The development of human-robot collaboration has the ability to improve manufacturing system performance by leveraging the unique strengths of both humans and robots. On the shop floor, human operators contribute with their adaptability and flexibility in dynamic situations, while robots provide precision and the ability to perform repetitive tasks. However, the communication gap between human operators and robots limits the collaboration and coordination of human-robot teams in manufacturing systems. Our research presents a human-robot collaborative assembly framework that utilizes a large language model for enhancing communication in manufacturing environments. The framework facilitates human-robot communication by integrating voice commands through natural language for task management. A case study for an assembly task demonstrates the framework's ability to process natural language inputs and address real-time assembly challenges, emphasizing adaptability to language variation and efficiency in error resolution. The results suggest that large language models have the potential to improve human-robot interaction for collaborative manufacturing assembly applications.


Understanding Large-Language Model (LLM)-powered Human-Robot Interaction

Kim, Callie Y., Lee, Christine P., Mutlu, Bilge

arXiv.org Artificial Intelligence

Large-language models (LLMs) hold significant promise in improving human-robot interaction, offering advanced conversational skills and versatility in managing diverse, open-ended user requests in various tasks and domains. Despite the potential to transform human-robot interaction, very little is known about the distinctive design requirements for utilizing LLMs in robots, which may differ from text and voice interaction and vary by task and context. To better understand these requirements, we conducted a user study (n = 32) comparing an LLM-powered social robot against text- and voice-based agents, analyzing task-based requirements in conversational tasks, including choose, generate, execute, and negotiate. Our findings show that LLM-powered robots elevate expectations for sophisticated non-verbal cues and excel in connection-building and deliberation, but fall short in logical communication and may induce anxiety. We provide design implications both for robots integrating LLMs and for fine-tuning LLMs for use with robots.


A Closed-Loop Multi-perspective Visual Servoing Approach with Reinforcement Learning

Zhang, Lei, Pei, Jiacheng, Bai, Kaixin, Chen, Zhaopeng, Zhang, Jianwei

arXiv.org Artificial Intelligence

Traditional visual servoing methods struggle to servo between scenes observed from multiple perspectives, a task humans can complete with visual signals alone. In this paper, we investigate how multi-perspective visual servoing can be solved under robot-specific constraints, including self-collision and singularity problems. We present a novel learning-based multi-perspective visual servoing framework that iteratively estimates robot actions from latent-space representations of visual states using reinforcement learning. Our approach was trained and validated in a Gazebo simulation environment connected to OpenAI Gym. Through simulation experiments, we show that our method can learn an optimal control policy given initial images from different perspectives, outperforming the Direct Visual Servoing algorithm with a mean success rate of 97.0%.
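The closed-loop structure described above (encode images to a latent space, let a policy output an incremental action, repeat until the latent error is small) can be sketched as follows. The encoder and policy here are trivial scalar stand-ins for the learned components, not the paper's models.

```python
# Closed-loop servoing skeleton: encode, act, observe, repeat.
# encode() and policy() are toy stand-ins for the learned networks.

def encode(image):
    return sum(image) / len(image)          # toy "latent": mean intensity

def policy(latent, goal_latent, gain=0.5):
    return gain * (goal_latent - latent)    # proportional step toward goal

def servo(image, goal_image, steps=20, tol=1e-3):
    goal_latent = encode(goal_image)
    latent = encode(image)
    for _ in range(steps):
        if abs(goal_latent - latent) < tol:
            break
        latent += policy(latent, goal_latent)   # apply action, re-observe
    return latent

result = servo([0.0, 0.2], [0.8, 1.0])
print(round(result, 3))   # 0.899, within tol of the goal latent 0.9
```

In the actual framework the "re-observe" step would come from new camera images after executing the robot action, which is what makes the loop robust across perspectives.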


A behavioural transformer for effective collaboration between a robot and a non-stationary human

Mon-Williams, Ruaridh, Stouraitis, Theodoros, Vijayakumar, Sethu

arXiv.org Artificial Intelligence

A key challenge in human-robot collaboration is the non-stationarity humans create by changing their behaviour. This alters environmental transitions and hinders human-robot collaboration. We propose a principled meta-learning framework to explore how robots can better predict human behaviour and thereby deal with non-stationarity. On the basis of this framework, we developed Behaviour-Transform (BeTrans), a conditional transformer that enables a robot agent to adapt quickly to new human agents with non-stationary behaviours, exploiting the transformer's strength on sequential data. We trained BeTrans on simulated human agents with different systematic biases in collaborative settings. Using an original customisable environment, we show that BeTrans collaborates effectively with simulated human agents and adapts to non-stationary simulated human agents faster than SOTA techniques.